tv Discussion on ISIS and Social Media CSPAN May 19, 2016 5:08am-6:26am EDT
now a discussion on how isis uses the internet and social media to recruit supporters. security experts testified before the congressional internet caucus advisory committee on the federal effort to combat the terror group online. this is just over an hour. good afternoon, everyone. we're going to get started. welcome to this panel, disrupting isis online, the challenges of combating online radicalization.
we're hosted by the congressional internet caucus. we would like to thank the cochairs for hosting us here today. the caucus hosts events every few weeks on salient topics in internet law and policy, and we invite you to come out for events coming up throughout the summer. so today we have several excellent panelists with us. we have emma llansó, who works on the free expression project, we have rashad hussain, and we have seamus hughes. and my name is miranda bogen, and i'm a fellow at the internet law and policy foundry and was a fellow at the congressional internet caucus in the past. so let's get started.
i'll just give a brief overview of the issue and then we'll jump into the real question here with extremists online: what role do the platforms play in this, and what is the right way to be approaching the issue of dealing with extremist content online and recruitment for terrorist groups abroad? as you may have seen, the social media platforms like twitter and facebook have generally, especially in their early years, been quite in favor of leaving their platforms as places for free expression. they've been adamant supporters of that. but gradually, especially over the past few years, we've seen that being taken advantage of by groups like al shabaab in somalia, like al qaeda, and then
we have the islamic state beginning to use the platforms more actively than that, bringing it to a totally different level. now the platforms are facing pressure on multiple sides, from governments here, governments abroad, and from their users to do something more to take the content out of people's social feeds. you know, it's not something you want to see every day, but it's also not something we want spreading around, because it is generally effective in recruiting people to go abroad and join these causes. so why don't we turn to seamus, who has been working on this. can you tell us, when did this start? how are the platforms being used? what are the groups doing? >> it started when the internet started. in the early ages, when we looked at the terrorist groups online,
it was in the protected places. if you look at the number of individuals arrested for isis-related charges in the u.s., it's 85 individuals since march 2014. the average age is 26. isis recruiters are going online to where their demographic is. that tends to be twitter. we've seen a shift moving over to telegram and other platforms. but they clearly use the online environment in a way that's conducive for them to recruit. think of it in three ways. first, they use it for grooming. over the summer, at the program on extremism at george washington, we did a six-month study of isis recruits online. we looked at about 1,000 accounts on a daily basis. among those, you see them grooming online. so we watched a young woman from the midwest who had questions about her faith; an isis recruiter realized she was naive, and a few weeks later he would slowly introduce the isis
narrative into the conversation. they're using it as a spotter to try to recruit people. the second way they use it is logistical support. take an individual, a 19-year-old kid from chicago: when he gets picked up at o'hare airport, he and his underage sibling are planning to go join isis. when they arrested him, they went through his stuff and realized he had four numbers of people to call when he got to turkey, numbers he received through the contacts he made on twitter. it lowers the bar for an individual to be able to meet a recruiter online. and the last way they use it is as the devil on the shoulder, egging people on to do this. you also have to realize that the numbers pale in comparison to any other form of conversation online. you're talking about 44,000 twitter accounts for isis supporters. the english-language scene is 1,000 to
3,000 online. and the last thing i would say: it's not like if twitter went away tomorrow we wouldn't have recruits joining. the fact that there's a physical space is a driver for people. twitter and places like that help to facilitate the recruitment, but they're not the reason. in the u.s., at least among the people who have been arrested, communities don't radicalize in america; individuals do. we don't have the pockets of radicalization like you would have in some european countries. here, if you're trying to find a like-minded individual, you're usually trying to find them online. >> rashad, maybe you can tell us how the department of justice and the government are approaching this phenomenon and how you're working to combat it. >> it's a threat we take seriously. our first priority is to protect the american people from attacks.
what we're seeing isil do online is use some sophisticated techniques. seamus talked about some of the approaches they've used. they've also done something different from previous groups in that they've adopted a crowd-sourced model through which they encourage anyone anywhere to go out and commit attacks against innocent people. part of the challenge we face as a government is that we have to be successful 100% of the time. isis is overwhelmingly rejected, but they're trying to recruit millions of people around the world. they reach out to an audience of 1.6 billion muslims and others. and even if they're successful in a minuscule number of those cases, you have a problem of 20,000 or 30,000 foreign fighters. you have the problem of isis getting followers all around the world. they're adept at using different techniques, targeting different
audiences in multiple languages. they've tried to reach out to disaffected youth and offer a sense of purpose, of belonging. they use a combination of strength and warmth to try to lure recruits, a sense of camaraderie. and as twisted as it sounds, they claim to be building something. so we've all seen the atrocities that they've broadcast around the world, but they've also put out positive messaging, as mentioned, around the themes of camaraderie and warmth. they claim to be building something, and they're calling people to build something which is, in their conception, the caliphate. and so one of the realizations that we have as government is that there are multiple audiences, and we have to be smart about using the right messengers to reach the right audiences. so government isn't always going to be the right messenger to
reach the various audiences we're trying to reach. roughly speaking, taking a look at the audiences, you have a class of people that are potentially thinking about joining isil in the short term. then you have the immediate influences around them, family, friends, peers. then you have a set of cultural influencers that can influence the public generally, and then you have the general public. so government may be more effective in the prevention space, reaching out to people who haven't already bought into aspects of the propaganda or the ideology. but you really need specific messengers to reach, for example, that specific class of potential fighters. who are they going to listen to? perhaps they'll only listen to other extremists. and maybe those are extremists that are not violent extremists but people that are extreme in their views. that's not a role for the government to play.
who is the best messenger to reach out to cultural influencers? what we've tried to do in government is, where possible, message ourselves to the audiences we think we can reach. some of the common themes we've used are highlighting isil's atrocities against muslim communities, where they're also killing in large numbers; amplifying the stories of people who have defected from isil; and highlighting isil's battlefield losses. they have territory they can point to, saying come and help us establish it, so we're pointing to the losses they're taking in iraq and syria. we've tried to expose the living conditions, and defectors have done some of that. and perhaps most importantly, we think it's important to work, not just as government but with partners, to disseminate positive messages that make clear what the rest of us stand for, what muslim communities stand for, and to
highlight positive alternatives. if someone says, i have a problem with what's happening in syria and i want to do something about it, we've got to find other paths for people to take that are constructive rather than destructive. >> so it sounds like we have the dual use of the internet, both as a platform for recruitment and as a platform for engagement on the other side. and we also see that the platforms are torn between taking down violent and threatening content, on one hand leaving it up for intelligence purposes, and on the other hand really trying to minimize what they're taking down so that they don't have to be the ones judging what is appropriate content and what is not. emma, can you tell us about the response we've seen from the companies and some of the concerns that they might be considering when they're asked to comment on how to approach this issue? >> sure.
yeah. so obviously over the past year and a half -- can you hear me now? clearly over the past year and a half we've seen a huge amount of scrutiny on major internet companies, the big social media platforms, about how they are responding to the existence of so-called extremist content online. and it might help to describe just a little bit of the legal framework around speech online, what it is that enables the exchange of information and expression of opinions that we all enjoy. in the u.s. we've got the strong protections of the first amendment for speech, where we have very high standards for what speech the government can actually say is unlawful. the relevant questions in that context are, you know, is a comment a direct incitement to imminent lawless action or
violence? is it a true threat of violence against another individual? but we don't generally have broad prohibitions against hate speech, and there's certainly no definition of extremist content as a set of unlawful speech. so already we're in an environment where what exactly we're talking about, what sort of speech and content, is unclear. what we've seen a lot of the companies do is apply their terms of service, which vary across platforms, as a way to remove content that gets reported to them. so internet companies, the hosts of our speech online, are generally protected from legal liability for speech that they are not themselves the author of. this is section 230 of the communications act, which ensures
that if i, for example, tweet something defamatory about seamus, seamus can sue me, of course, because i'm the one who said it, but he can't go and sue twitter about it. and this law has been incredibly important to the amazing innovation we've seen with the internet and online platforms, and also to supporting speech online. all of us depend on a number of different intermediaries being willing to host and transmit our speech; if your isp or social media provider could face legal liability for your speech, they would be very unlikely to be willing to let you speak. but also in that law, companies are protected from liability for their decisions to remove speech. this is where we see companies developing terms of service where they set out the standards for what kind of speech they'll accept on their platforms and what they'll say is a violation of their rules or
standards. and so a lot of the platforms have rules about hate speech, even though this is very often speech that's totally protected under the law in the u.s. they may still say that they don't want to host speech that is denigrating of a particular group or class. most have standards against direct threats or threats of violence. i believe facebook has a standard against dangerous organizations in particular, by which they tend to mean terrorist organizations or organized crime. so we've seen a range of different kinds of terms grow up on the different platforms over the years, and companies then, in response to user flags about speech that appears to violate their terms, will take a look at content and see, does this go too far, does this step over the line of what they've already described to be
acceptable or not acceptable on their platforms. >> so i'm interested to hear from the rest of the panel about this balance between the opportunity of the internet as a platform to spread various different types of speech, positive speech, to keep track of what's going on, and the desire to control the dangerous speech, the hate speech. in the research arena, how do you see that playing out? >> i'm dual-hatted on this one. we have a fellow, j.m. berger, who looked at english-language accounts over a period of months to find out whether takedowns were effective or not. here's the takeaway: the takedowns of accounts were effective in terms of reducing the number of followers the person had when they came back,
particularly on twitter. but there's a built-in system for resiliency. take an individual like terrence mcneil, arrested on terrorism charges last fall. by the time he was arrested he was lone wolf 21. he came back as lone wolf 8, 9, 10, and there's an isis echo chamber that has shoutout accounts: here's lone wolf 8, he used to be lone wolf 7, everyone follow him. there's a built-in system: they know they're getting kicked off for violating terms of service, but they're going to help each other make sure they get back on. from a research perspective you want more data. it's a balancing act on whether takedown is the necessary way. i tend to be more on the positive counter-messaging side than i am on takedown. >> yeah, we've been encouraged
by companies enforcing their terms of service. and you know, there are echo chambers out there in the violent extremist world where they're posting violent tweets and beheading videos. now, there may be some limited cases in which it can be helpful to companies, but for the most part i agree with seamus's view on it. it's important again to remember that overwhelmingly isil is rejected around the world, and there's a reason for that: it's largely because of their own actions. a lot of the atrocities that they're committing, the stories that have been told by people who have been impacted by isil and other groups, the stories of defectors, all of those are getting out through social media as well. and so i know we have perhaps
thousandths of a percent of the people who are targeted by isil who have gone and joined. that's unacceptably high for all of us, because we're trying to prevent any single attack from ever happening. but it's important to remember that these platforms also provide an opportunity to put out not just counter-messaging but positive messaging that allows the rest of us, including muslim communities, to communicate what we stand for. >> that's really the risk of overbroad content policies, or particularly of increasing pressure on companies to strengthen their policies, to make them so that more content can come down. it is this potentially vastly overbroad response to what ends up being, as seamus's research tends to indicate, a lot of one-on-one communications that end up driving the actual individual to commit an act of violence. and if you're trying to capture
one-on-one, highly tailored, direct conversations with a policy that's about taking down all of the speech that's in the general area of discussing isis and terrorism and u.s. policy, you're throwing out a lot of baby with very little bathwater. >> so that's a good segue. we've had some pressure from the u.s. government to add additional liability for the platforms, or at least to compel them to turn over certain information if they come across it, or for government agencies to use certain information in their response. and we've also had more collaborative approaches, with the summits between the administration and silicon valley companies in california. what is your sense of the right way to approach this, if the overbroad approach is just that?
>> so there have been some proposals in congress that would try to require internet companies to report apparent terrorist activity to the government if they identify it. and this kind of proposal is pretty concerning. in the particular bills that have been proposed, there's no real definition of what terrorist activity might be. and what that sort of model would set up is a huge incentive for all of our communications providers to err on the side of caution. i think the result of that would be a huge amount of overreporting, which is both incredibly concerning for individual civil liberties, our right to privacy in our own communications, and also not
really generating useful information for law enforcement. so i think it's very much more what rashad had been saying about the need to support the environment where the defectors or the journalists or the advocates who are out there countering the message that isis presents can provide their own positive viewpoints and positive ideas. we need to ensure that there are strong protections for free speech in place so that can happen. unfortunately, we see reports from journalists about the way that anti-terrorism laws in egypt and turkey, countries that are allies in the fight against isis, are being used to put journalists in jail. that kind of overbroad approach, which ends up constraining the speech of exactly those people
whom we need to get different viewpoints and messages out there, is a real risk. >> there's also kind of an interesting dynamic here. you can think about the government's amazing ability for convening. if i call ten social media providers and get them in a room, it's a hard pitch. if rashad does it, it's a different pitch. i think back to my days in government. i was in sacramento and i was talking to an imam. i said, what are you going to do? he said, i'm going to grab my phone and talk about why isis is wrong for the following reasons. that's great, sir, but no one is going to watch that. it's ten minutes of you holding your phone. here you have a guy who wants to do the messaging but has no idea how to tag the videos so they pop up when the next video pops up. but the government has the ability to play the matchmaker in this situation, saying, we
don't actually want to be anywhere near this thing, but here's somebody you may want to talk to. >> that's how we've tried to use our convening role, by bringing together the types of community leaders you mentioned: artists, people that are adept at using social media and the platforms, the advertising sector, silicon valley companies. after that, our job is sustained communication to some extent. but realizing that government is not the best messenger, our job is also to step back and allow the creative people that know how to put out the best positive messaging and counter-messaging to do their thing. there is evidence to indicate that we're making steady progress in this area. not only have the social media companies -- we've had cooperative relationships and discussions with them, and not only have we seen announcements like twitter's announcement that they've taken
down 125,000 isil-affiliated accounts, but we've also seen polling data indicating that larger and larger percentages of young arab populations are ruling out any possibility of joining isil. 80% of 18- to 24-year-olds surveyed in 16 countries in the arab world said they would never consider joining it. if you were to do a poll of the disapproval rating of isil in many of these countries, it's even higher. so a lot of attention is paid to the small percentage, and deservedly so. but it's important to keep in mind there's a lot of good work being done, largely outside of government, to make sure that those that might be susceptible to isil don't fall prey to their message. >> i think that's a very important point too. when you look at this, we're talking about a manageable number. the fbi director talks about
900 to 1,000 active investigations across every state. you can actually tailor your message to those 900 to 1,000 people. you can do one-on-one interventions online. you're never going to be able to deradicalize someone online, but you may plant a seed of doubt and have a real-life, offline conversation about how the person can come back into the fold. >> reaching that right target audience is a challenge. now, if the numbers which you stated, and which we've talked about on this panel, are approximately correct in terms of the number of people in the united states that might be susceptible to isil's ideology, you don't want to have a messaging campaign that sends the message that all muslim youth are vulnerable. it's not the case that just because some muslim youth might face discrimination, they're susceptible to violent extremism. there's data that indicates that muslim americans have, per capita, the same or higher education levels and higher income levels than people of other faiths. and so you don't want a one-size-fits-all mass-messaging approach to reach the audiences that we've talked about. if you look at seamus's report on the isil-related arrests, there's a statistic in there that says that 40% of those that were arrested were recent converts to islam. sometimes there's a narrative out there that because there's youth that have grown up alienated, muslim youth are uniquely susceptible to isis recruitment.
the 40% that are recent converts didn't grow up as young muslims. we have to be careful how we message on this. muslim americans sitting at their dinner tables every night are talking about the same issues as all other americans. just because they're muslims doesn't mean isis is the number one conversation point at the dinner table. in fact, they're overwhelmingly rejecting the message that isil is putting out there. that's borne out by the data that we see. >> messaging itself is one issue, like what do we say. but it sounds like targeting is equally important. is there a role for internet platforms to help in advising how to go about that targeting, or to prioritize certain content algorithmically? is that, from a speech perspective, equally problematic as taking down content?
>> some of the things that we've seen from a couple of the big social media platforms have been much less about actually affecting the main content displays: the facebook news feed, the twitter feed, or search results. they've been clear about not wanting to change and start manipulating those displays of their product because of pressure from governments. that's the right call. that's the kind of overbearing government effect on our access to information, and on what views and perspectives are out there, that i think would undermine a lot of the very good counter-narratives that we see coming out. what we've seen some companies do is run programs, which they've had around a number of topics, but really focusing in on the
question of radicalization and extremism right now, where, in the advertising space that appears alongside search results or on your facebook page, they sponsor certain nonprofits so that they can have their message show up as an ad alongside related content. i think there are still some questions there about whether the company is getting too far into trying to promote certain ideas over others. we have a funny relationship with social media platforms where in a lot of ways we like it when content that we care about is displayed to us. we don't want to see 19 million baby photos if that's not what we're into. but also, when it seems like companies are taking a non-neutral or ideologically motivated position, that can make people feel really uncomfortable. a key part around all of this is
transparency. people are particularly uncomfortable when it's not clear where the motivation is coming from or how viewpoints are trying to be shaped. the more we can hear from the companies about what they are doing, the more we can see open public discussions about what government might be considering and what companies are considering, as opposed to closed-door meetings where we only get leaks of agendas and bits and pieces of anonymous reports in the news. the more transparent we can be about how things are being worked out and what influences are there, the more comfortable a lot of people will be. >> there was a lot of talk, not so recently but before, when the platforms seemed to be doing a little less to combat this, that maybe those efforts were actually happening but the companies didn't want to talk about them. one reason being you don't want to show your cards to the people who are trying to game the system and put that content up.
and two, cooperating with the government, especially post the snowden revelations, was not necessarily desirable for the users. and my sense is we've seen a shift, and users are now actually wanting to see more of that. is this something you've seen? do you think the trend of keeping a distance will evolve toward public cooperation, or do you see it continuing? >> i'd come back to the point about transparency. one takeaway we can have from the snowden revelations: you don't want to surprise people with the scope of what's going on. that creates a really strong backlash. it's our right as citizens to know how our government is affecting our environment for speech, how our government is influencing what access to information in public
we have. i think having these conversations more publicly is really important. which is not to say that we necessarily want really close coordination between governments and companies on this. very much to that point, i was really glad to hear you talk about the recognition of when government needs to step back. the worst thing would be to undermine the efforts of the people providing alternative viewpoints because those people are cast as being too close to the u.s. government and discounted for that reason. >> i understand the sensitivities that you mentioned, miranda. but at the same time, social media companies are very clear about the fact that they don't want their platforms being used by terrorists to spread their message. and so there is a lot of basis for cooperation, and we're seeing progress in that area. and i think that the trend is headed in the right direction, as you mentioned.
>> so given the sensitivities, and given that the overbroad approach may not be the right way, what would be helpful from companies, from civil society, from the american people in helping combat this content in the right way, in a smart way? >> i think if you look -- when we did our report on isis in america, we talked to a number of muslim-american community leaders and religious leaders, and they want to get online and engage: i want to talk to this kid that i'm worried about and bring him back into the fold, but i'm worried that if i do, i'm going to get recognized at the airport. there is some level of policy or legal guidance that organizations could provide on what's appropriate online and what's not. you know, i understand when i
engage with these individuals that i'm probably going to hit up against stuff. i understand the risk and i know the transparency in it. but to ask somebody from middle america to understand those nuances without some left and right latitude -- that guidance would be something the government could provide relatively easily. >> and i think one contribution that companies can make in all of this, in addition to all of the work they're already doing, is even more improvement in appeals processes for when people have their content taken down or their accounts deactivated. we know, as companies focus on trying to enforce their terms consistently, that mistakes happen; the scale of content that gets posted and reviewed by companies every day is enormous. so there are going to be cases where the ten or fifteen seconds of human review that decides whether an
account should come down errs too far on the side of takedown. you may be losing countering voices in that process. it means ensuring that there are ways for people to appeal, and that those appeals are looked at not just with an eye toward keeping the most extreme or violent content off a platform, but toward making sure that the space for discussion and debate about that content, and these issues more generally, can still persist. >> and we can look into providing additional guidance in addition to what's there. those doing the work of counter-messaging shouldn't be in a position where they have to be concerned about being accused of providing material support. we look at all of those examples on a case-by-case basis. it's clear in cases where
someone is out there and they're trying to do the good work of countering the message rather than supporting what isil is doing. >> we're on a bit of a tight schedule today. i want to open it up to questions from the audience so that all of the panelists have a chance to address them. rashad has to run out, but if we have any final questions we can keep the remaining panelists. does anyone have anything they would like to ask? no? well, one other question i had: you know, there have been several lawsuits against the platforms for hosting this content, which they're immune from under the law. but can you explain a little more -- do you think those cases will go anywhere? do you think they're just people jumping on the topic of the day? >> so generally the law is pretty clear.
there are strong protections against holding platforms civilly liable for speech that their users post. so i think there have been a few cases where people are seeking damages for the death of a loved one that they tried to ultimately tie back to content that had been posted on a social media network. and of course it's always a really heartbreaking story. you can understand why a person is trying to find some restitution. but i think we need to be very careful about how broadly we would scope who is the proximate cause of the death of somebody in a terrorist act. and i think trying to sweep online platforms in under a very broad idea of general liability for actions that are many steps removed from anything they're directly involved with is ultimately not going to succeed.
>> and i know the department of justice has sort of played with the idea of going after people who are sharing the content itself. is that something you're continuing to pursue? are you approaching the people who are maybe not promoting the content directly, who are not the recruiters, but who are supporting it and sharing it? >> our approach in this area is governed by the first amendment. there's a lot of speech that is protected speech that we may not agree with, and we're not prosecuting those cases. the cases which could be prosecuted are ones in which there has been a specific threat or solicitation of crimes against particular individuals. seamus, you referenced one of the cases from ohio, the mcneil case. those are the cases we're talking about. >> given that this is such a live issue and such an important one, because it's really affecting lives -- even on whatever scale it's happening, it is very distressing, i think, to the public and to the
platforms who are having to deal with this and everyone working on it. what do you think the most important thing is for congress to take away from this issue, moving forward, as they're thinking about how to either legislate, hold off on legislating, or ask the companies for help? and maybe on the flip side, you know, for any other parties involved, what is the most important thing we should be doing to continue the trend of individuals rejecting the message that isis is spreading online? >> it's very clear we're not going to kill our way out of the problem. we're not going to treat our way out of the problem. so we need to continue reaching the right audiences through the right messengers. and that requires, of course, not just the government but a whole range of actors. and i think we have put into place now, at the government level and working with civil society, a number of mechanisms by which we can get out the right counter
messaging, the right positive messaging and then the right positive alternatives. young people, as i spoke about in the beginning, may be disaffected for whatever reason. they may see something happening across the world that they view as an injustice, and they cannot sit still, they have to do something about it. we have to work together to find mechanisms to reach the small population that may be attracted to isil's message. their message is overwhelmingly rejected already. we don't want to be reaching out in the name of reaching targeted communities with overbroad tactics or messages that could paint entire groups as vulnerable or as a problem when we have a distinct audience that we're trying to reach through some actors. and then in the preventive space, general audiences that
we're trying to reach through a set of those actors or a different set of actors. >> and i would say too, for congress and everyone to remember, that the u.s. will be watched very closely for our responses to all of this, right? the standard that we set and the model that we set can do either a lot of good or a lot of harm. and so if we can keep it on the side of good, we can show that there are ways to pursue this fight against isis that don't involve broad-based censorship, that don't try to play whack-a-mole with extremist content online, that avoid a stigmatizing effect on muslim communities, and that instead focus on showing how truly supporting our fundamental values of freedom of speech and a right to privacy can actually help us succeed in the fight. ultimately, a message about what it means to conduct
this fight from a position of democratic ideals will be much more convincing than an approach that is motivated by fear, looks to crack down on more speech, and puts many more people under scrutiny by the government. >> i think i'm going to be contrarian for the sake of conversation. congress has the ability to have a large megaphone on this. i don't believe there would have been a summit given by the white house if it wasn't for congress constantly hammering social media companies to deal with the content. it's almost a forcing function. for two years they got beat up on the hill about videos of u.s. soldiers being killed that were posted by a baghdad sniper. it's a balancing act.
i understand the freedom of speech issues, but congress can play a role in forcing the convening, as uncomfortable as that is. the default of the social media companies is very libertarian. there is a disconnect between the families that we talk about in the lawsuits and the conversation online. i'm being contrarian, no? >> any last questions? >> yeah, we have a question from the audience. >> i just wanted to discuss the importance of counter messaging, developing this type of positive content. are there any empirical ways of measuring the success of that? you can see if someone is exposed to the content, but actually showing that it translates to offline behavior and correlates with deradicalization is harder.